18 research outputs found

    Effective Target Aware Visual Navigation for UAVs

    In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment and constantly face a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) cannot constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target re-projection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC), which implicitly allows the multirotor to satisfy both sets of constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
    Comment: Conference paper at the European Conference on Mobile Robots (ECMR) 201
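    The core idea, a trajectory optimization that trades target visibility against the vehicle's dynamic limits, can be pictured with a deliberately tiny sketch. This is not the paper's formulation: it is a hypothetical 1-DOF yaw-only problem with a quadratic cost, solved by plain gradient descent.

```python
import math

def plan_headings(waypoints, target, w_smooth=0.5, iters=200, lr=0.1):
    """Toy stand-in for reprojection-error minimization: pick a yaw at each
    waypoint that keeps the target in view (yaw close to the bearing to the
    target) while penalizing yaw changes between consecutive waypoints,
    a crude proxy for the vehicle's yaw-rate limits."""
    # initialize each yaw to point straight at the target
    yaws = [math.atan2(target[1] - y, target[0] - x) for x, y in waypoints]
    for _ in range(iters):
        grads = []
        for i, (x, y) in enumerate(waypoints):
            bearing = math.atan2(target[1] - y, target[0] - x)
            g = 2.0 * (yaws[i] - bearing)            # visibility term
            if i > 0:                                # smoothness terms
                g += 2.0 * w_smooth * (yaws[i] - yaws[i - 1])
            if i < len(yaws) - 1:
                g += 2.0 * w_smooth * (yaws[i] - yaws[i + 1])
            grads.append(g)
        yaws = [a - lr * g for a, g in zip(yaws, grads)]
    return yaws
```

    The actual method optimizes full 3D trajectories under the UAV's dynamics and tracks them with an NMPC; this sketch only shows the shape of the cost trade-off.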

    Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection

    Selective weeding is one of the key challenges in the field of agricultural robotics. To accomplish this task, a farm robot should be able to accurately detect plants and to distinguish between crop and weeds. Most of the promising state-of-the-art approaches make use of appearance-based models trained on large annotated datasets. Unfortunately, creating large agricultural datasets with pixel-level annotations is an extremely time-consuming task, which in practice penalizes the usage of data-driven techniques. In this paper, we address this problem by proposing a novel and effective approach that aims to dramatically reduce the human intervention needed to train the detection and classification algorithms. The idea is to procedurally generate large synthetic training datasets by randomizing the key features of the target environment (i.e., crop and weed species, type of soil, light conditions). More specifically, by tuning these model parameters, and by exploiting a few real-world textures, it is possible to render a large amount of realistic views of an artificial agricultural scenario with little effort. The generated data can be directly used to train the model or to supplement real-world images. We validate the proposed methodology by using as a testbed a modern deep learning based image segmentation architecture. We compare the classification results obtained using both real and synthetic images as training data. The reported results confirm the effectiveness and the potential of our approach.
    Comment: To appear in IEEE/RSJ IROS 201
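    The procedural randomization step can be pictured as sampling renderer parameters from hand-chosen ranges. The parameter names and ranges below are invented for illustration and are not taken from the paper:

```python
import random

# Hypothetical parameter space for the scene generator, not the paper's settings.
SCENE_PARAMS = {
    "soil_texture": ["loam", "clay", "sandy"],   # discrete choice
    "light_azimuth_deg": (0.0, 360.0),           # continuous range
    "crop_density_per_m2": (5, 40),              # integer range
    "weed_pressure": (0.0, 0.3),                 # continuous range
}

def sample_scene(rng=random):
    """Draw one random scene configuration to feed the renderer."""
    cfg = {}
    for key, spec in SCENE_PARAMS.items():
        if isinstance(spec, list):
            cfg[key] = rng.choice(spec)
        elif all(isinstance(v, int) for v in spec):
            cfg[key] = rng.randint(*spec)
        else:
            cfg[key] = rng.uniform(*spec)
    return cfg
```

    Each sampled configuration would drive one rendering pass, so large labelled datasets can be produced without manual annotation.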

    Plane extraction for indoor place recognition

    In this paper, we present an image-based plane extraction method well suited for real-time operation. Our approach exploits the assumption that the surrounding scene is mainly composed of planes disposed in known directions. Planes are detected from a single image by exploiting a voting scheme that takes into account the vanishing lines. Then, candidate planes are validated and merged using a region growing based approach to detect, in real time, planes inside an unknown indoor environment. Using the related plane homographies, it is possible to remove the perspective distortion, enabling standard place recognition algorithms to work in a viewpoint-invariant setup. Quantitative experiments performed with real-world images show the effectiveness of our approach compared with a very popular method.
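    The rectification step rests on plane homographies. A minimal sketch of the underlying machinery, a basic unnormalized DLT homography estimate and its application to pixel coordinates (standard textbook material, not the paper's specific pipeline):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 plane homography to Nx2 pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Euclidean

def homography_dlt(src, dst):
    """Estimate H from >= 4 point correspondences (basic DLT, no
    normalization): stack two linear constraints per correspondence and
    take the null vector of the resulting system via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)
```

    Warping an image through the inverse of such a homography is what removes the perspective distortion of a detected plane.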

    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGV) in farming applications. Approaches based solely on visual cues or on low-cost GPS are easily prone to fail in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework, designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper.
    Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
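    The pose-graph fusion of drifting relative odometry with noisy absolute GPS fixes can be sketched in one dimension as a weighted linear least-squares problem. This is a toy model; the paper's system is 3D and additionally includes the DEM and MRF constraints.

```python
import numpy as np

def fuse_odometry_gps(odom_deltas, gps, w_odom=10.0, w_gps=1.0):
    """Tiny 1-D 'pose graph': unknowns x_0..x_n, an odometry edge between
    each pair of consecutive poses, and a GPS prior on every pose.
    Solved as a weighted linear least-squares problem."""
    n = len(odom_deltas) + 1
    rows, rhs, weights = [], [], []
    for i, d in enumerate(odom_deltas):          # edge: x_{i+1} - x_i = d
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(r); rhs.append(d); weights.append(w_odom)
    for i, z in enumerate(gps):                  # prior: x_i = z (noisy)
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(z); weights.append(w_gps)
    A = np.asarray(rows) * np.sqrt(weights)[:, None]
    b = np.asarray(rhs) * np.sqrt(weights)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

    The odometry edges constrain only relative motion (they are invariant to a global shift), so the GPS priors fix the absolute position while the strong odometry weights smooth out the GPS noise.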

    Non-Linear Model Predictive Control with Adaptive Time-Mesh Refinement

    In this paper, we present a novel solution for real-time Non-Linear Model Predictive Control (NMPC) exploiting a time-mesh refinement strategy. The proposed controller formulates the Optimal Control Problem (OCP) in terms of flat outputs over an adaptive lattice. In common approximated OCP solutions, the number of discretization points composing the lattice represents a critical upper bound for real-time applications. The proposed NMPC-based technique refines the initially uniform time horizon by adding time steps with a sampling criterion that aims to reduce the discretization error. This enables a higher accuracy in the initial part of the receding horizon, which is more relevant to NMPC, while keeping the number of discretization points bounded. By combining this feature with an efficient Least Squares formulation, our solver is also extremely time-efficient, generating trajectories of multiple seconds within only a few milliseconds. The performance of the proposed approach has been validated in a high-fidelity simulation environment, using a UAV platform. We also released our implementation as open-source C++ code.
    Comment: In: 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR 2018)
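    The refinement idea, adding discretization points wherever a local error estimate is too large, can be sketched generically. The `err` criterion below is a placeholder supplied by the caller, not the paper's actual sampling criterion:

```python
def refine_mesh(ts, err, tol, max_nodes=50):
    """Insert midpoints on intervals whose local error estimate exceeds
    tol, until all intervals pass or a node budget is hit.
    `err(t0, t1)` is any per-interval discretization-error estimate."""
    ts = list(ts)
    changed = True
    while changed and len(ts) < max_nodes:
        changed = False
        for i in range(len(ts) - 1):
            if err(ts[i], ts[i + 1]) > tol:
                ts.insert(i + 1, 0.5 * (ts[i] + ts[i + 1]))
                changed = True
                break   # rescan from the start after each insertion
    return ts
```

    In the NMPC setting, an error criterion that weights the early part of the horizon more heavily naturally concentrates nodes where accuracy matters most.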

    AgriColMap: Aerial-Ground Collaborative 3D Mapping for Precision Farming

    The combination of the aerial survey capabilities of Unmanned Aerial Vehicles with the targeted intervention abilities of agricultural Unmanned Ground Vehicles can significantly improve the effectiveness of robotic systems applied to precision agriculture. In this context, building and updating a common map of the field is an essential but challenging task. The maps built using robots of different types show differences in size, resolution and scale, the associated geolocation data may be inaccurate and biased, while the repetitiveness of both visual appearance and geometric structures found within agricultural contexts renders classical map merging techniques ineffective. In this paper we propose AgriColMap, a novel map registration pipeline that leverages a grid-based multimodal environment representation which includes a vegetation index map and a Digital Surface Model. We cast the data association problem between maps built from UAVs and UGVs as a multimodal, large-displacement dense optical flow estimation. The dominant, coherent flows, selected using a voting scheme, are used as point-to-point correspondences to infer a preliminary non-rigid alignment between the maps. A final refinement is then performed by exploiting only meaningful parts of the registered maps. We evaluate our system using real-world data for three fields with different crop species. The results show that our method outperforms several state-of-the-art map registration and matching techniques by a large margin, and has a higher tolerance to large initial misalignments. We release an implementation of the proposed approach along with the acquired datasets with this paper.
    Comment: Published in IEEE Robotics and Automation Letters, 201
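    The voting step that keeps only the dominant, coherent flow vectors can be approximated by binning the flows on a coarse grid and keeping the winning bin. This is a deliberate simplification for illustration, not the paper's actual selection scheme:

```python
from collections import Counter

def dominant_flow(flows, bin_size=1.0):
    """Vote 2-D flow vectors into coarse bins and return the vectors in the
    winning bin; isolated, inconsistent flows are discarded as outliers."""
    def key(f):
        return (round(f[0] / bin_size), round(f[1] / bin_size))
    votes = Counter(key(f) for f in flows)
    best, _ = votes.most_common(1)[0]
    return [f for f in flows if key(f) == best]
```

    The surviving, mutually consistent flows then serve as point-to-point correspondences for the preliminary alignment.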

    Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming

    An effective perception system is a fundamental component for farming robots, as it enables them to properly perceive the surrounding environment and to carry out targeted operations. The most recent approaches make use of state-of-the-art machine learning techniques to learn an effective model for the target task. However, those methods need a large amount of labelled data for training. A recent approach to deal with this issue is data augmentation through Generative Adversarial Networks (GANs), where entire synthetic scenes are added to the training data, thus enlarging and diversifying their informative content. In this work, we propose an alternative solution to common data augmentation techniques, applying it to the fundamental problem of crop/weed segmentation in precision farming. Starting from real images, we create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts. To do that, we employ a conditional GAN (cGAN), where the generative model is trained by conditioning on the shape of the generated object. Moreover, in addition to RGB data, we also take into account near-infrared (NIR) information, generating four-channel multi-spectral synthetic images. Quantitative experiments, carried out on three publicly available datasets, show that (i) our model is capable of generating realistic multi-spectral images of plants and (ii) the usage of such synthetic images in the training process improves the segmentation performance of state-of-the-art semantic segmentation Convolutional Networks.
    Comment: Submitted to Robotics and Autonomous Systems
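    Once a synthesized patch and its conditioning mask are available, composing the semi-artificial sample reduces to a mask-based replacement, which works unchanged for four-channel RGB+NIR arrays. A minimal sketch of that compositing step (the cGAN itself is out of scope here):

```python
import numpy as np

def composite(real_img, synth_img, mask):
    """Replace the masked object pixels (e.g. crop/weed regions) of a real
    image with synthesized ones; works for any channel count, including
    4-channel RGB+NIR."""
    out = real_img.copy()
    sel = mask.astype(bool)
    out[sel] = synth_img[sel]
    return out
```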

    Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution

    The application of autonomous robots in agriculture is gaining increasing popularity thanks to the high impact it may have on food security, sustainability, resource use efficiency, reduction of chemical treatments, and the optimization of human effort and yield. With this vision, the Flourish research project aimed to develop an adaptable robotic solution for precision farming that combines the aerial survey capabilities of small autonomous unmanned aerial vehicles (UAVs) with targeted intervention performed by multi-purpose unmanned ground vehicles (UGVs). This paper presents an overview of the scientific and technological advances and outcomes obtained in the project. We introduce multi-spectral perception algorithms and aerial and ground-based systems developed for monitoring crop density, weed pressure, and crop nitrogen nutrition status, and for accurately classifying and locating weeds. We then introduce the navigation and mapping systems tailored to our robots in the agricultural environment, as well as the modules for collaborative mapping. We finally present the ground intervention hardware, software solutions, and interfaces we implemented and tested in different field conditions and with different crops. We describe a real use case in which a UAV collaborates with a UGV to monitor the field and to perform selective spraying without human intervention.
    Comment: Published in IEEE Robotics & Automation Magazine, vol. 28, no. 3, pp. 29-49, Sept. 202

    Development of a Visual Navigation System for Multirotor Vehicles

    MAVs are naturally unstable platforms exhibiting great agility, and they thus require a trained pilot to operate them. Industrial inspection with MAVs is a challenging task, and the ability to hover with high accuracy and stability is a key issue. In addition, payload on micro aerial vehicles is very limited, and the use of lightweight sensing equipment directly results in longer flight duration or the ability to carry additional payload. This motivates the use of vision sensors for control. In this thesis, a modular framework that allows the user to test different components of a stereo vision based pose estimation algorithm has been created. Using this framework, different approaches and different implementations for the steps of the algorithm have been tested. As a result, two approaches using SURF and SIFT features and descriptor-based matchers are compared, and a solution for the inspection of large industrial buildings is proposed.
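    The SURF/SIFT matching stage mentioned above is commonly implemented as brute-force nearest-neighbour search with Lowe's ratio test. A small NumPy version of that standard step (illustrative, not the thesis code):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour descriptor matching with Lowe's ratio
    test: accept a match only if the best candidate is clearly closer than
    the second best, which rejects ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

    The accepted matches feed the subsequent pose estimation between the stereo views.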

    Perception and environment modeling in robotic agriculture contexts

    Precision Agriculture (PA) is now a term used throughout the agricultural domain worldwide. It gained popularity and increasing interest from the research community due to the wide range of potential benefits and to the availability of new off-the-shelf sensing technologies. PA methods, indeed, promise to increase the quantity and quality of agricultural outputs while using less input (e.g., water, energy, fertilizers, pesticides, ...). The aim is to save costs, reduce environmental impact, and produce more and better food. In this domain, a promising solution that is rapidly growing is robotic farming. By combining the aerial survey capabilities of Unmanned Aerial Vehicles (UAVs) with multi-purpose agricultural Unmanned Ground Vehicles (UGVs), a robotic system will be able to survey a field from the air, perform a targeted intervention on the ground, and provide detailed information for decision support, all with minimal user intervention. In recent years, despite great progress in automating farming activities by using robotic platforms, most of the existing systems do not provide a sufficient autonomy level. Making farming robots more autonomous brings the benefits of completing tasks faster and adapting to different purposes and farm fields, which makes them more useful and increases their profitability. However, making farming robots more autonomous involves increasing their perception and awareness of their surrounding environment. A typical agricultural scenario presents unique characteristics, such as highly repetitive visual and geometrical patterns and the lack of distinguishable landmarks. These features do not allow the direct application of most of the state-of-the-art perception methods from other robotic domains. This thesis focuses on perception methods that enable robots to autonomously operate in farming environments, specifically a localization method and a collaborative mapping method between aerial and ground robots. They improve the robot's perception capabilities by exploiting the unique context-based characteristics of farm fields and by fusing together several heterogeneous sensors. Additionally, this thesis addresses the problem of crop/weed mapping by employing end-to-end visual classifiers. This thesis also presents contributions in perception-based control methods. Such approaches allow the robot to navigate the environment while taking into account the perception constraints. The following is a full list of contributions:
    • Development of crop/weed detection and classification algorithms based on deep neural networks.
    • A method to summarize a big dataset by information entropy maximization. The manual annotation of the summarized dataset allows the trained network to obtain a similar classification accuracy while considerably reducing the manual annotation effort.
    • A model-based dataset generation method for crop and weed detection. The generated data can be used to augment or to supplement a real-world training dataset. The synthetic data are made available as open source.
    • A multi-cue positioning system for ground farming robots that fuses several heterogeneous sensors and incorporates context-based characteristics.
    • A novel multimodal environment representation that enhances the key characteristics of the farm field while filtering out redundant information.
    • A collaborative mapping method that registers maps acquired by both aerial and ground vehicles.
    • Perception-based control methods that steer the robot to the desired location while satisfying perception constraints.
    • A novel temporal registration method that registers maps over time to monitor the evolution of the farm field (work in progress).
    Moreover, another important outcome of this thesis is the set of open-source software modules and datasets released, which I hope the community will benefit from.
    The work developed in this thesis has been done following the operating scenario proposed by the Flourish project, in which Sapienza University of Rome participated as a consortium partner.
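    The entropy-maximization summarization contribution can be pictured with a toy greedy selector over discrete labels. The thesis presumably operates on richer image statistics; the label-based criterion below is only an illustration of the greedy entropy-maximization idea:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (nats) of a list of discrete labels."""
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def summarize(items, k):
    """Greedy sketch: pick k items whose label distribution has maximal
    entropy, so the summary stays as diverse as possible.
    `items` are (id, label) pairs."""
    chosen = []
    pool = list(items)
    for _ in range(k):
        best = max(pool, key=lambda it: entropy([l for _, l in chosen] + [it[1]]))
        chosen.append(best)
        pool.remove(best)
    return chosen
```

    Annotating only the diverse summary, rather than the whole dataset, is what reduces the manual labelling effort while preserving classification accuracy.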